

Search for: All records

Creators/Authors contains: "Zhang, Lijun"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

  1.–4. Free, publicly-accessible full text available December 9, 2025 (four embargoed records)
  5. Nafion, a widely used proton exchange membrane in fuel cells, is a representative perfluorosulfonic acid membrane consisting of a hydrophobic Teflon backbone and hydrophilic sulfonic acid side chains. Its thermal conductivity (k) is critical to the thermal management of fuel cells. During fuel cell operation, water molecules inevitably enter Nafion and can strongly affect its k. In this work, we measure the k of Nafion at different water contents (λ). The findings reveal that k is significantly low in a vacuum environment, at 0.110 W m−1 K−1, but at λ ∼ 1 a notable increase is observed, reaching 0.162 W m−1 K−1. Moreover, k at λ ≈ 6 is 60% higher than at λ ∼ 1. This exceptional increase far exceeds the prediction of effective medium theory, which considers only simple physical mixing. Rather, the increase is attributed to the formation of water clusters and channels with increasing λ, which create thermal pathways through hydrogen bonding, improving chemical connections within the Nafion structure and augmenting its k. Furthermore, Nafion's k reaches a maximum of 0.256 W m−1 K−1 at λ ≈ 6, with no further increase up to λ ≈ 10.5. This plateau is explained by the coalescence of water clusters at λ ≈ 6 into channels that optimize heat-transfer pathways and connections within the Nafion structure. Beyond this point (λ > 6), the free movement of water molecules within the water channels produces physical alterations to the Nafion structure (a significant volume increase) that have little additional impact on k.
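The abstract's claim that the measured k far exceeds the effective-medium prediction can be sanity-checked numerically. The sketch below is illustrative only: it uses a simple parallel rule of mixtures as an upper bound for simple physical mixing, the k values quoted in the abstract, a literature value for bulk water, and an assumed (hypothetical) water volume fraction at λ ≈ 6 that is not given in the abstract.

```python
# Hedged sketch: compare the measured Nafion k with a simple
# effective-medium (parallel rule-of-mixtures) upper bound.
# The water volume fraction is an illustrative assumption.
K_NAFION_DRY = 0.110  # W/m/K, measured in vacuum (from abstract)
K_WATER = 0.60        # W/m/K, bulk liquid water (literature value)

def emt_parallel(phi_water):
    """Parallel (upper-bound) rule of mixtures for a two-phase medium."""
    return phi_water * K_WATER + (1 - phi_water) * K_NAFION_DRY

phi = 0.20            # assumed water volume fraction at lambda ~ 6
k_emt = emt_parallel(phi)
k_measured = 0.256    # W/m/K at lambda ~ 6 (from abstract)
print(f"EMT upper bound: {k_emt:.3f} W/m/K, measured: {k_measured:.3f} W/m/K")
```

Even this upper bound falls below the measured 0.256 W m−1 K−1, consistent with the abstract's conclusion that hydrogen-bonded water clusters and channels, not simple mixing, drive the enhancement.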
  6. AI-powered applications often involve multiple deep neural network (DNN)-based prediction tasks to support application-level functionalities. However, executing multiple DNNs can be challenging due to the high resource demands and computation costs, which increase linearly with the number of DNNs. Multi-task learning (MTL) addresses this problem by designing a multi-task model that shares parameters across tasks based on a single backbone DNN. This paper explores an alternative approach called model fusion: rather than training a single multi-task model from scratch as MTL does, model fusion fuses multiple task-specific DNNs that are pre-trained separately, and may have heterogeneous architectures, into a single multi-task model. We materialize model fusion in a software framework called GMorph to accelerate multi-DNN inference while maintaining task accuracy. GMorph features three main technical contributions: graph mutations to fuse multiple DNNs into resource-efficient multi-task models, search-space sampling algorithms, and predictive filtering to reduce the high search costs. Our experiments show that GMorph can outperform MTL baselines and reduce the inference latency of multi-DNNs by 1.1–3× while meeting the target task accuracy.
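The core idea of model fusion described above, merging separately trained task-specific networks so that shared computation runs once, can be sketched at a very high level. This is a minimal toy illustration, not GMorph's actual graph-mutation algorithm: networks are modeled as lists of layer identifiers, and fusion simply factors out an identical prefix into a shared backbone. All layer names are hypothetical.

```python
# Hedged sketch of the model-fusion idea: two task-specific
# networks with an identical prefix of layers are merged so the
# shared prefix (backbone) executes once per input, and only the
# task-specific tails run separately.
def fuse(model_a, model_b):
    """Return (shared_backbone, tail_a, tail_b) by factoring out
    the longest common prefix of identical layers."""
    shared = []
    for la, lb in zip(model_a, model_b):
        if la != lb:
            break
        shared.append(la)
    n = len(shared)
    return shared, model_a[n:], model_b[n:]

net1 = ["conv1", "conv2", "fc_task1"]  # hypothetical task-1 network
net2 = ["conv1", "conv2", "fc_task2"]  # hypothetical task-2 network
backbone, tail1, tail2 = fuse(net1, net2)
print(backbone, tail1, tail2)
```

In this toy setting, the fused model evaluates `conv1` and `conv2` once instead of twice, which is the source of the latency savings; GMorph additionally searches over graph mutations and handles heterogeneous architectures, which this sketch does not attempt.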